15

According to the Wikipedia article on GCR, the Commodore 1541 disk drive used a particularly efficient GCR encoding scheme to cram 170K onto the same 5.25" disks that in an Apple drive only stored 140K.

However, for the Amiga 3.5" disks, they reverted to MFM, a lower density encoding. Why abandon something that worked so well?

According to "Amiga Disk Encoding Schemes MFM? GCR? Please explain!" by Betty Clay,

[GCR] would appear to let a disk hold far more information than could be stored under most other methods. However, since GCR permits the use of as many as eight on-bits in a row, the drive cannot interpret them at full speed. It is necessary to write or read at only half the normal speed, in order to insure accuracy. When the writing speed is slowed to four microseconds per bit instead of the normal two, the density of the data is only half as much, cutting drastically into the storage advantage.

That would indeed seem to eliminate the advantage of GCR. But then why did that disadvantage not apply to its use on 5.25" disks? Is it a limitation of the rate at which the electronics can process the bits, or of the physics of the interaction between the drive head and the magnetic fields?

  • 9
    The Amiga was designed by a different company and bought by Commodore. So it isn't like Commodore was abandoning what was done in the 1541. Amiga Inc. just wasn't basing their work on Commodore's computers.
    – vacawama
    Jan 24, 2019 at 14:22
  • 2
    As @vacawama notes, the Amiga wasn't Commodore - it was actually from the same group of people who designed the Atari 2600 and the Atari 400/800. If you consider those the first two generations of a platform, the Amiga would be the 3rd. But as for the floppy disk, the Atari 800's 810 and 1050 drives were - you guessed it - MFM and not GCR.
    – bjb
    Aug 14, 2021 at 1:58

7 Answers

20

On a floppy disk, each 'bit' is a flux reversal — a magnetic event. If those bits are too close together, they'll leak into one another and data will be lost.

Disk controllers use a regular clock and either write a transition or write nothing at each clock tick.

There's also a limit in the other direction, on how far apart transitions can be. Disk rotation speed varies according to the whims of the motor, aerodynamic drag, etc., and drives contain automatic gain controls — if they think they aren't seeing data but should be, they turn up their own volume.

So bits need to be regular enough that the controller doesn't have to make too many guesses about rotation speed, and the drive doesn't turn up its gain so far that it's reading noise.

As a result, the bit patterns that drives actually write are a translation of the bytes to be stored into some other encoding that guarantees bits are neither too far apart nor too close together.

FM and the GCR schemes solve 'too far apart' differently, but use the same solution for ensuring bits can't be too close together: their data clock is picked so that any two ticks are already far enough apart. The GCR schemes are simply more efficient about it than FM: FM encoding uses two output bits per input bit, whereas e.g. Apple's second GCR encoding uses only eight output bits for six input bits.

MFM is a later development than GCR and provides a different solution to the too-close-together problem: it guarantees that a transition is never written in two adjacent bit cells. So you can double the data clock without fear of magnetic collision. Like FM, it also produces two output bits per input bit, but those two fit into the same physical space as one FM bit. Hence: double density.
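
To make those rules concrete, here is a minimal Python sketch of FM and MFM encoding (purely illustrative; the function names are my own and real controllers do this in hardware). The MFM clock-bit rule is what enforces both the minimum and the maximum spacing described above.

    def fm_encode(bits):
        """FM: every data bit is preceded by a clock bit that is always 1.
        Transitions are never far apart, but half the cells are spent on clocks."""
        out = []
        for b in bits:
            out += [1, b]
        return out

    def mfm_encode(bits):
        """MFM: the clock bit is 1 only when both the previous and the current
        data bit are 0.  The output never has two 1s in adjacent cells (so the
        cell clock can be doubled) and never more than three 0s in a row."""
        out, prev = [], 0
        for b in bits:
            out += [1 if (prev == 0 and b == 0) else 0, b]
            prev = b
        return out

    data = [1, 0, 0, 1, 1, 0, 0, 0]
    print(fm_encode(data))   # [1,1, 1,0, 1,0, 1,1, 1,1, 1,0, 1,0, 1,0]
    print(mfm_encode(data))  # [0,1, 0,0, 1,0, 0,1, 0,1, 0,0, 1,0, 1,0]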

MFM is just as valid an improvement for 5.25" drives as for any other, and is better than both companies' GCRs; the reason that Apple and Commodore each came up with GCR schemes is that they were coming up with something better than FM, not rejecting MFM — both companies released drives before MFM controllers were available.

  • 11
    MFM requires a higher bit rate and an ability to more accurately measure flux-transition widths. The design of the Disk-II controller is very much tied to the fact that the worst-case time for a branch-until-ready loop on the 6502 is slightly less than two bit times, and would require extra buffering if that were not the case.
    – supercat
    Jan 24, 2019 at 6:27
6

No answer yet has mentioned the four different writing speeds that the Commodore 8-bit floppy drives used. That way, a drive could store:

    Track   Sectors/track   Sum Sectors   Storage in Bytes/track
    -----   -------------   -----------   ----------------------
     1-17        21            357             7820
    18-24        19            133             7170
    25-30        18            108             6300
    31-40(*)     17             85             6020
                               ---
                               683 (for a 35 track image)

while single-speed FM/MFM floppies have to use the smaller number of bytes per track, the figure dictated by the innermost tracks, on the outer tracks too.
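
As a quick sanity check, a small Python sketch (the zone list is just the table above transcribed, assuming 256 data bytes per sector) reproduces the familiar "170K" figure from the question:

    # Zone layout of the standard 35-track 1541 format, from the table above.
    zones = [
        (17, 21),  # tracks 1-17: 21 sectors per track
        (7, 19),   # tracks 18-24: 19 sectors per track
        (6, 18),   # tracks 25-30: 18 sectors per track
        (5, 17),   # tracks 31-35: 17 sectors per track
    ]
    total_sectors = sum(tracks * sectors for tracks, sectors in zones)
    print(total_sectors)        # 683
    print(total_sectors * 256)  # 174848 bytes, roughly the 170K quoted in the question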

The only other system I know of which did that was an Olivetti PC with FM floppies. Totally incompatible with anything. Unlike the Commodore 8-bit floppies, it changed the physical drive speed instead of the writing speed.

  • Similar: Victor 9000 / Sirius 1 (designed by Chuck Peddle) used variable speed floppy drives with nine different physical speeds. I think a 5.25" floppy held 600 KB one-sided, 1.2 MB dual-sided. I used such a system around 1982/1983.
    – njuffa
    Oct 19, 2021 at 21:01
5

I think the reason was simple: the Amiga used standard (well, nearly standard) double-density disk drives. That was not true of the 1541, which used a non-standard drive (consisting of only the mechanical parts) together with fairly sophisticated controller circuitry, including analog parts (read and write amplifiers, disk motor driver).

The standard drive was optimized for a standard bit rate, so not even an emulation of CLV (constant linear velocity) by switching the bit frequency was possible, as the 1541 did.

The MFM/GCR difference is actually the less important part, since, again, the disk drive is not aware of the specific encoding. If you wish, you can use GCR encoding on the Amiga as well, because the Amiga FDC does not do any MFM encoding or decoding itself; that is done by either the CPU or the blitter.
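
To illustrate that point, here is a generic Python sketch of software MFM decoding (my own illustration, not the actual trackdisk/blitter routine, and not the AmigaDOS sector layout, which additionally splits odd and even data bits into separate long words): once the raw bit cells are in memory, recovering the data is essentially just dropping the clock cells.

    def mfm_decode(cells):
        """Generic MFM decode: each cell pair is (clock, data),
        so the data bits are every second cell."""
        return [cells[i] for i in range(1, len(cells), 2)]

    # The MFM cell stream 01 00 10 01 01 00 decodes to the data bits 1 0 0 1 1 0:
    print(mfm_decode([0,1, 0,0, 1,0, 0,1, 0,1, 0,0]))  # [1, 0, 0, 1, 1, 0]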

2

Aside from the technical aspects: when early floppy systems came out, disks were expensive and higher capacity was a selling point. By the time that 3½" disks were popular, the media was cheaper and it wasn't worth the effort of having complex controllers to save a few bytes.

2

Well, I asked an expert. The Apple II's 3.5 inch floppy drive was/is GCR. I think I'm right in saying that the most you could put onto an Amiga disc and have it work on ALL Amigas was 837K, and the Apple II could store 1440K.

I talked to an old friend (she wrote Jaguar XJ220 on the Amiga & ST, for example) and she explained the lengths she had gone to in order to produce a duplicable GCR master for an Amiga game. She had to write it on a modified Atari Falcon drive. The first thing she did was to find out how many bytes would fit onto each track, and she would use almost all of it.

She organised it so that when the drive moved 1 track out (data ALWAYS went from inside to outside), the next sector sought would be 20-40% of a rotation away so it was faster and almost silent.

The Amiga WOULD read GCR data if put into GCR mode, but without changing the hardware (the disc controller chip), it couldn't write it.

I should add that some Amiga drives could reliably store the data raw. I think quite a few people 'discovered' this and thought that they would get rich by patenting a trick to double the capacity of Amiga discs with software. Sadly, it didn't work on them all.

I note that some third-party 'high density' floppy drives were produced and I think the A4000 drive WAS high density.

Just recently I read about the genius who made the C64's 1541 drive load 12 times faster using only software. One key discovery was that reads from the floppy didn't have to be timed to the microsecond, i.e. the:

wait_loop:
          bvc wait_loop     ; spin until the byte-ready line sets the V flag

i.e. the overflow flag of the drive's 6502 was repurposed to detect reads.

When the overflow flag was set (the byte-ready line is wired to the 6502's SO pin), it meant that a complete byte was ready to be stored; the code clears the flag again with CLV before waiting for the next one. Well, he used this:

loop:
        bvc     loop        ; wait until the byte-ready line sets V
        clv                 ; clear V again for the next byte
        lda     $1c01       ; read the raw GCR byte from the drive's VIA port
        sta     ($30),y     ; store it via the zero-page buffer pointer
        iny
        bne     loop        ; 256 bytes per pass

As I understand it, he had an outer loop, so in total it ALWAYS read 2560 bytes in one go and transferred them as-is to the C64 itself. The C64 then worked out where the first byte of the first sector was and, using some tables (about 2.5K of them), converted the GCR back into ordinary bytes.

Seems so incredibly sad that Commodore could almost certainly have spent a bit longer and come up with something like this. If you think about it, the C64 actually had more than 64K of RAM. The VIC chip had a colour attribute map that only used bits 0-3 (I am guessing that in fact it was 1024 nybbles) and so while not QUITE as fast, they could have recouped that memory and mapped it to $c000 (for example).

I expect that the low speeds involved mean that impedance wasn't a problem. My hat goes off to the designers of USB 3.0 but isn't it interesting that it uses 8b/10b encoding.

  • 1
    I wonder why the C64 hardware grouped incoming bits into octets rather than groups of five? A state machine to group things in fives with hardware to disable counting when more than eight consecutive one bits have been received without an intervening zero would have made encoding and decoding much easier.
    – supercat
    Oct 20, 2021 at 20:00
1

I recall that one 'fast disc' algorithm for the 1541 saw the data sent to the C64 in GCR format, where it was converted using look-up tables. I think I am correct in saying that two 1-bits could exist next to each other, but never more than two 0-bits in a row. A 1 is stored as a change (a flux reversal) in the magnetic field; a 0 is no change.
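
For reference, here is a small Python sketch of that table-driven conversion, using the Commodore 4-bit-to-5-bit GCR code values as they are commonly documented (illustrative only; the real drive routines and fastloaders do this with 6502 lookup tables):

    # Commodore 4-bit-to-5-bit GCR table: the codes are chosen so that no run of
    # three or more 0-bits can occur, even where two codes meet.
    GCR_ENCODE = [0x0A, 0x0B, 0x12, 0x13, 0x0E, 0x0F, 0x16, 0x17,
                  0x09, 0x19, 0x1A, 0x1B, 0x0D, 0x1D, 0x1E, 0x15]
    GCR_DECODE = {code: value for value, code in enumerate(GCR_ENCODE)}

    def gcr_encode(data):
        """Encode bytes as GCR: two nybbles per byte, five code bits per nybble."""
        bits, nbits, out = 0, 0, bytearray()
        for byte in data:
            for nybble in (byte >> 4, byte & 0x0F):
                bits = (bits << 5) | GCR_ENCODE[nybble]
                nbits += 5
                while nbits >= 8:
                    nbits -= 8
                    out.append((bits >> nbits) & 0xFF)
        return bytes(out)

    print(gcr_encode(b"\x01\x02\x03\x04").hex())  # 4 data bytes become 5 GCR bytes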

The 1541 took its timing from the disc rotation. I remember that the V (overflow) flag of the 1541's 1 MHz 6502 had been hotwired to the byte-ready signal, so that when the code was waiting for a byte to read, it used a single instruction:

.wait
     bvc .wait      ; loop until the byte-ready line sets V

I think the idea in both formats is to avoid the need for a dedicated disc-controller chip. The 1541 used a 6502; the Amiga used the Paula chip. MANY people tried to format Amiga floppies in GCR format, so I would argue that the extra space really was considered useful.

  • 1
    Thanks for the answer, but I don't remember any efforts to use GCR on Amiga. Larger disk capacity seemed to be from attempting extra tracks or using compression.
    – scruss
    Aug 11, 2021 at 18:23
  • In general, disk hardware will limit the minimum and maximum number of zero bits that can appear between 1 bits. For FM, or Manchester encoding which write 2 bits per data bit, the limits are 0 and 1. For Apple 13-sector GCR which wrote 8 bits for 5 data bits, the limits were also 0 and 1; for Apple 16-sector GCR which wrote 8 bits per 6 data bits, or Commodore GCR which wrote 5 bits per 4 data bits, the limits were 0 and 2. For MFM which writes 2 bits per data bit, the limits are 1 and 3, but guaranteeing at least one 0 bit between 1 bits allows the data rate to be doubled.
    – supercat
    Aug 12, 2021 at 17:37
1

BTW, on close examination of GCR, it's possible to encode 10 bits into 11 bits. It would require the controller to switch between 2 decode tables, but, put simply, if a GCR code ends with a 1, the next code could begin with 2 zero bits.

I appreciate that it is merely of historic interest, but I am old enough to recall 5¼" floppy drives, and allied with larger sectors, I think over 200K per side would have been possible.

Since I grew up with a C64 and actually did do some code for the 1541's processor (it used a 6502 @ 1 MHz as the controller), it's a shame Commodore didn't use its flexibility. That said, the serial connector was usually the bottleneck (although the clock line could be used to send data as well; imagine, 2-BIT PARALLEL!).

